Frame-Oriented Architecture
AlphaNet: Scaling Up Local Frame-based Atomistic Foundation Model
Yin, Bangchen, Wang, Jiaao, Du, Weitao, Wang, Pengbo, Ying, Penghua, Jia, Haojun, Zhang, Zisheng, Du, Yuanqi, Gomes, Carla P., Duan, Chenru, Xiao, Hai, Henkelman, Graeme
We present AlphaNet, a local frame-based equivariant model designed to achieve both accurate and efficient simulations for atomistic systems. Recently, machine learning force fields (MLFFs) have gained prominence in molecular dynamics simulations due to their advantageous efficiency-accuracy balance compared to classical force fields and quantum mechanical calculations, alongside their transferability across various systems. Despite the advancements in improving model accuracy, the efficiency and scalability of MLFFs remain significant obstacles in practical applications. AlphaNet enhances computational efficiency and accuracy by leveraging the local geometric structures of atomic environments through the construction of equivariant local frames and learnable frame transitions. We substantiate the efficacy of AlphaNet across diverse datasets, including defected graphene, formate decomposition, zeolites, and surface reactions. AlphaNet consistently surpasses well-established models, such as NequIP and DeepPot, in terms of both energy and force prediction accuracy. Notably, AlphaNet offers one of the best trade-offs between computational efficiency and accuracy among existing models. Moreover, AlphaNet exhibits scalability across a broad spectrum of system and dataset sizes, affirming its versatility.
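The local-frame idea at the heart of such models can be illustrated compactly. The sketch below is a minimal, hypothetical example (not AlphaNet's actual implementation; all names are illustrative) of building an orthonormal local frame from two neighbor displacement vectors via Gram-Schmidt. Because the frame rotates with the atomic environment, coordinates projected onto it are invariant under global rotations, which lets a network operate on plain scalars without losing equivariance.

```python
import numpy as np

def local_frame(r_ij, r_ik):
    """Build an orthonormal local frame from two neighbor vectors via
    Gram-Schmidt. Assumes the two vectors are not collinear; a real
    implementation must handle degenerate neighborhoods."""
    e1 = r_ij / np.linalg.norm(r_ij)
    u2 = r_ik - np.dot(r_ik, e1) * e1     # remove component along e1
    e2 = u2 / np.linalg.norm(u2)
    e3 = np.cross(e1, e2)                 # right-handed third axis
    return np.stack([e1, e2, e3])         # rows are the frame axes

# Projecting a relative position onto the frame yields rotation-invariant
# scalars: rotating both the frame and the vector leaves them unchanged.
frame = local_frame(np.array([1.0, 0.0, 0.0]), np.array([0.3, 0.9, 0.1]))
scalars = frame @ np.array([0.5, -0.2, 0.7])
```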
EF-Calib: Spatiotemporal Calibration of Event- and Frame-Based Cameras Using Continuous-Time Trajectories
Wang, Shaoan, Xin, Zhanhua, Hu, Yaoqing, Li, Dongyue, Zhu, Mingzhu, Yu, Junzhi
The event camera, a bio-inspired asynchronously triggered sensor, offers promising prospects for fusion with frame-based cameras owing to its low latency and high dynamic range. However, calibrating stereo vision systems that incorporate both event and frame-based cameras remains a significant challenge. In this letter, we present EF-Calib, a spatiotemporal calibration framework for event- and frame-based cameras using continuous-time trajectories. A novel calibration pattern applicable to both camera types and a corresponding event recognition algorithm are proposed. Leveraging the asynchronous nature of events, a differentiable piecewise B-spline representation of the continuous camera pose is introduced, enabling joint calibration of intrinsic parameters, extrinsic parameters, and time offset, with analytical Jacobians provided. Experiments evaluate the calibration performance of EF-Calib for intrinsic parameters, extrinsic parameters, and time offset. The results show that EF-Calib estimates intrinsic parameters more accurately than the current state of the art, achieves extrinsic accuracy close to that of frame-based methods, and estimates the time offset accurately. EF-Calib provides a convenient and accurate toolbox for calibrating systems that fuse events and frames. The code of this paper will be open-sourced at: https://github.com/wsakobe/EF-Calib.
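The continuous-time representation can be sketched with a uniform cubic B-spline in matrix form; evaluating the spline at an arbitrary event timestamp is what lets asynchronous events constrain the trajectory. The sketch below is a simplified, translation-only illustration (EF-Calib itself calibrates full camera poses on cumulative splines; function and variable names here are hypothetical):

```python
import numpy as np

# Uniform cubic B-spline blending matrix: p(u) = [1, u, u^2, u^3] @ M @ P,
# where P stacks four consecutive control points.
M = (1.0 / 6.0) * np.array([[ 1,  4,  1, 0],
                            [-3,  0,  3, 0],
                            [ 3, -6,  3, 0],
                            [-1,  3, -3, 1]])

def eval_spline(ctrl, t, t0, dt):
    """Evaluate a uniform cubic B-spline with control points `ctrl`
    (N x D array) at time t; t0 is the start time, dt the knot spacing.
    Assumes t falls inside the valid segment range."""
    s = (t - t0) / dt
    i = int(np.floor(s))                  # index of the active segment
    u = s - i                             # normalized position in segment
    b = np.array([1.0, u, u**2, u**3]) @ M
    return b @ ctrl[i:i + 4]              # blend 4 neighboring control points

ctrl = np.random.rand(8, 3)               # toy 3D translation control points
pos = eval_spline(ctrl, t=0.37, t0=0.0, dt=0.1)
```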
eWand: A calibration framework for wide baseline frame-based and event-based camera systems
Gossard, Thomas, Ziegler, Andreas, Kolmar, Levin, Tebbe, Jonas, Zell, Andreas
Accurate calibration is crucial for using multiple cameras to triangulate the position of objects precisely. However, it is also a time-consuming process that must be repeated after every displacement of the cameras. The standard approach is to use a printed pattern with known geometry to estimate the intrinsic and extrinsic parameters of the cameras. The same idea can be applied to event-based cameras, though it requires extra steps: a printed pattern can be detected by reconstructing frames from events, or a blinking pattern can be displayed on a screen and detected directly from the events. Such calibration methods provide accurate intrinsic calibration for both frame- and event-based cameras. However, 2D patterns have several limitations for multi-camera extrinsic calibration when the cameras have highly different viewpoints and a wide baseline: the pattern can only be detected from one direction and must be large enough to compensate for its distance to the cameras. This makes the extrinsic calibration time-consuming and cumbersome. To overcome these limitations, we propose eWand, a new method that uses blinking LEDs inside opaque spheres instead of a printed or displayed pattern. Our method provides a faster, easier-to-use extrinsic calibration approach that maintains high accuracy for both event- and frame-based cameras.
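A blinking LED is convenient for event cameras because its known frequency separates it from background activity. The following rough sketch (not eWand's actual detector; all names are hypothetical) keeps pixels whose median inter-event interval matches half the blink period, i.e., one ON and one OFF event per cycle:

```python
import numpy as np

def blink_mask(xs, ys, ts, width, height, f_led, tol=0.2):
    """Flag pixels whose median inter-event interval is within `tol`
    of half the LED blink period. xs, ys, ts are parallel event arrays."""
    half_period = 0.5 / f_led
    last_t = np.full((height, width), np.nan)
    intervals = [[[] for _ in range(width)] for _ in range(height)]
    for x, y, t in zip(xs, ys, ts):
        if not np.isnan(last_t[y, x]):
            intervals[y][x].append(t - last_t[y, x])
        last_t[y, x] = t
    mask = np.zeros((height, width), dtype=bool)
    for y in range(height):
        for x in range(width):
            if intervals[y][x]:
                med = float(np.median(intervals[y][x]))
                mask[y, x] = abs(med - half_period) < tol * half_period
    return mask
```

The centroid of the surviving pixels then gives a candidate LED position that can be matched across cameras for wand-style extrinsic calibration.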
A Comparison between Frame-based and Event-based Cameras for Flapping-Wing Robot Perception
Tapia, Raul, Rodríguez-Gómez, Juan Pablo, Sanchez-Diaz, Juan Antonio, Gañán, Francisco Javier, Rodríguez, Iván Gutierrez, Luna-Santamaria, Javier, Dios, José Ramiro Martínez-de, Ollero, Anibal
Perception systems for ornithopters face severe challenges. The harsh vibrations and abrupt movements caused by flapping are prone to produce motion blur and strong lighting changes. Strict restrictions on weight, size, and energy consumption also limit the type and number of sensors that can be mounted onboard. Lightweight traditional cameras have become a standard off-the-shelf solution in many flapping-wing designs. However, bioinspired event cameras are a promising alternative for ornithopter perception due to their microsecond temporal resolution, high dynamic range, and low power consumption. This paper presents an experimental comparison between a frame-based and an event-based camera. Both technologies are analyzed against the particular specifications of flapping-wing robots, and the performance of well-known vision algorithms is evaluated experimentally on data recorded onboard a flapping-wing robot. Our results suggest that event cameras are the most suitable sensors for ornithopters. Nevertheless, they also highlight the open challenges for event-based vision onboard flapping-wing robots.
Real-time event simulation with frame-based cameras
Ziegler, Andreas, Teigland, Daniel, Tebbe, Jonas, Gossard, Thomas, Zell, Andreas
Event cameras are becoming increasingly popular in robotics and computer vision due to their beneficial properties, e.g., high temporal resolution, high bandwidth, almost no motion blur, and low power consumption. However, these cameras remain expensive and scarce, making them inaccessible to most researchers. Event simulators minimize the need for real event cameras when developing novel algorithms. However, due to the computational complexity of the simulation, the event streams of existing simulators cannot be generated in real time; they must be pre-calculated from existing video sequences or pre-rendered and then simulated from a virtual 3D scene. Although these offline-generated event streams can be used as training data for learning tasks, applications that depend on response time cannot yet benefit from these simulators, as they still require an actual event camera. This work proposes simulation methods that improve the performance of event simulation by two orders of magnitude, making it real-time capable, while remaining competitive in quality.
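The event model being simulated is simple: a pixel emits an event whenever its log intensity changes by more than a contrast threshold since the last event at that pixel. A naive, non-real-time sketch of this model, assuming grayscale frames and illustrative names (real simulators vectorize this and interpolate between frames for finer timestamps), could look like:

```python
import numpy as np

def frames_to_events(frames, timestamps, C=0.2, eps=1e-6):
    """Emit (x, y, t, polarity) whenever the log intensity at a pixel
    drifts by more than contrast threshold C from its reference level.
    `frames` is a sequence of 2D grayscale arrays, `timestamps` the
    matching capture times."""
    log_ref = np.log(frames[0].astype(np.float64) + eps)
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(frame.astype(np.float64) + eps)
        diff = log_i - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= C)
        for x, y in zip(xs, ys):
            pol = 1 if diff[y, x] > 0 else -1
            events.append((x, y, t, pol))
            log_ref[y, x] += pol * C   # move reference one threshold step
    return events
```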
Baral
In this paper, we encode some of the reasoning methods used in frame-based knowledge representation languages in answer set programming (ASP). In particular, we show how "cloning" and "unification" in frame-based systems can be encoded in ASP. We then show how some types of queries against a biological knowledge base can be encoded using our methodology. We also provide insight into how the reasoning can be done more efficiently when dealing with a large knowledge base.
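To make the frame-based notions concrete, the sketch below shows what "cloning" with slot inheritance means in a frame system, written in plain Python rather than ASP (the paper's actual encoding); the class design and the biology example are purely illustrative:

```python
class Frame:
    """Minimal frame: named slots plus a parent link. A clone inherits
    slot values from its prototype unless overridden locally."""
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)   # inherited value
        raise KeyError(slot)

    def clone(self, name, **overrides):
        return Frame(name, parent=self, **overrides)

gene = Frame("gene", organism="unknown", codes_for="protein")
trpA = gene.clone("trpA", organism="E. coli")
assert trpA.get("codes_for") == "protein"  # inherited via the clone link
```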
How the GenForward Poll of Young Americans Was Conducted
The original sample was drawn from two sources. Fifty-one percent of respondents are part of NORC's AmeriSpeak panel, which was selected randomly from NORC's National Frame based on address-based sampling and recruited by mail, email, telephone and face-to-face interviews. Forty-nine percent of respondents are part of a custom panel of young adults that uses an address-based sample drawn from a registered voter database of the entire U.S. and is recruited by mail and telephone.
How the GenForward poll of young Americans was conducted
The original sample was drawn from two sources. Forty-five percent of respondents are part of NORC's AmeriSpeak panel, which was selected randomly from NORC's National Frame based on address-based sampling and recruited by mail, email, telephone and face-to-face interviews. Fifty-five percent of respondents are part of a custom panel of young adults that uses an address-based sample drawn from a registered voter database of the entire U.S. and is recruited by mail and telephone.
How the AP-NORC poll on drugs was conducted
The Associated Press-NORC Center for Public Affairs Research poll on drugs and substance abuse was conducted by NORC Feb. 11-14. It is based on online and telephone interviews of 1,042 adults who are members of NORC's nationally representative AmeriSpeak panel. The original sample was drawn from respondents selected randomly from NORC's National Frame based on address-based sampling and recruited by mail, email, telephone and face-to-face interviews. NORC interviews participants over the phone if they don't have Internet access. Because the panel is probability-based and covers people without Internet access, surveys using AmeriSpeak are nationally representative.